Implicit iteration methods in Hilbert scales under general smoothness conditions
Authors
Abstract
Similar articles
Implicit iteration methods in Hilbert scales
For solving linear ill-posed problems, regularization methods are required when the right-hand side is contaminated with noise. In this paper regularized solutions are obtained by implicit iteration methods in Hilbert scales. By exploiting operator monotonicity of certain functions and interpolation techniques in variable Hilbert scales, we study these methods under general smoothness conditions. Order o...
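The abstract does not reproduce the scheme itself; for orientation, the classical implicit iteration method (iterated Tikhonov) computes x_k from (A*A + αI)x_k = A*y + αx_{k−1}. The sketch below is a minimal illustration of that iteration in the simplest setting, with the trivial scale operator L = I and a made-up diagonal test problem — it is an assumption-laden toy, not the method analyzed in the paper.

```python
import numpy as np

def implicit_iteration(A, y, alpha=0.1, n_iter=200):
    """Implicit (iterated Tikhonov) iteration, sketched with L = I:
        (A^T A + alpha I) x_k = A^T y + alpha x_{k-1},  x_0 = 0.
    Each step is an implicit (backward) regularized correction."""
    n = A.shape[1]
    x = np.zeros(n)
    M = A.T @ A + alpha * np.eye(n)   # factor once; reused every step
    rhs_fixed = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(M, rhs_fixed + alpha * x)
    return x

# hypothetical toy problem: diagonal operator with decaying singular values
A = np.diag([1.0, 0.5, 0.1])
x_true = np.array([1.0, 2.0, 3.0])
x = implicit_iteration(A, A @ x_true)
```

For exact data the iterates approach the true solution as the iteration count grows; with noisy data the iteration index plays the role of the regularization parameter and must be stopped early.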
Preconditioning Landweber iteration in Hilbert scales
In this paper we investigate convergence of Landweber iteration in Hilbert scales for linear and nonlinear inverse problems. As opposed to the usual application of Hilbert scales in the framework of regularization methods, we focus here on the case s ≤ 0, which (for Tikhonov regularization) corresponds to regularization in a weaker norm. In this case, the Hilbert scale operator L^{-2s} appearing i...
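The iteration sketched below illustrates the idea in the linear case: Landweber steps x_{k+1} = x_k + ω L^{−2s} A*(y − A x_k), where for s ≤ 0 the factor L^{−2s} acts as a preconditioner. The concrete operators, the step size rule, and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def landweber_hilbert_scale(A, L, y, s=-0.5, omega=None, n_iter=500):
    """Sketch of Landweber iteration in a Hilbert scale:
        x_{k+1} = x_k + omega * L^{-2s} A^T (y - A x_k),  s <= 0.
    L is assumed symmetric positive definite; L^{-2s} is formed via
    its eigendecomposition (feasible only for small toy problems)."""
    w, V = np.linalg.eigh(L)
    P = V @ np.diag(w ** (-2.0 * s)) @ V.T       # P = L^{-2s}
    if omega is None:
        # step size bounded by the spectral norm of the iteration operator
        omega = 1.0 / np.linalg.norm(P @ A.T @ A, 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + omega * P @ (A.T @ (y - A @ x))
    return x

# hypothetical toy problem (diagonal, so behavior is easy to inspect)
A = np.diag([1.0, 0.5, 0.2])
L = np.diag([1.0, 2.0, 3.0])
x_true = np.array([1.0, -2.0, 3.0])
x = landweber_hilbert_scale(A, L, A @ x_true)
```

With s < 0 the operator L^{−2s} = L^{2|s|} amplifies the components that plain Landweber contracts slowest, which is the preconditioning effect the abstract refers to.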
Inexact Newton regularization methods in Hilbert scales
We consider a class of inexact Newton regularization methods for solving nonlinear inverse problems in Hilbert scales. Under certain conditions we obtain an order-optimal convergence rate result. Mathematics Subject Classification (2000) 65J15 · 65J20 · 47H17
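The abstract gives no formulas; the following sketch shows the generic REGINN-style inexact Newton structure such methods build on: each outer step approximately solves the linearized equation F'(x)h = y − F(x) by an inner regularizing iteration (here plain Landweber), stopped once the inner relative residual falls below a forcing parameter μ. The Hilbert-scale preconditioning of the paper is omitted; everything below is an illustrative assumption.

```python
import numpy as np

def inexact_newton(F, dF, y, x0, mu=0.1, n_outer=30, max_inner=200):
    """Inexact Newton sketch: outer linearization, inner Landweber.
    F  : nonlinear forward map, F(x) -> residual-space vector
    dF : x -> Jacobian matrix of F at x
    mu : inner stopping tolerance (forcing term), 0 < mu < 1."""
    x = np.array(x0, dtype=float)
    for _ in range(n_outer):
        r = y - F(x)
        J = dF(x)
        omega = 1.0 / np.linalg.norm(J.T @ J, 2)  # inner step size
        h = np.zeros_like(x)
        for _ in range(max_inner):
            inner_res = r - J @ h
            # stop the inner iteration at relative residual mu
            if np.linalg.norm(inner_res) <= mu * np.linalg.norm(r):
                break
            h = h + omega * J.T @ inner_res
        x = x + h
    return x

# hypothetical smooth test problem: solve x^3 = y componentwise
F = lambda x: x ** 3
dF = lambda x: np.diag(3.0 * x ** 2)
x = inexact_newton(F, dF, np.array([8.0, 27.0]), x0=[1.0, 1.0])
```

For noisy data such schemes are combined with a discrepancy-principle stopping rule for the outer iteration; the sketch omits that for brevity.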
Convergence of Least Squares Temporal Difference Methods Under General Conditions
We consider approximate policy evaluation for finite state and action Markov decision processes (MDP) in the off-policy learning context and with the simulation-based least squares temporal difference algorithm, LSTD(λ). We establish for the discounted cost criterion that the off-policy LSTD(λ) converges almost surely under mild, minimal conditions. We also analyze other convergence and bounded...
Least Squares Temporal Difference Methods: An Analysis under General Conditions
We consider approximate policy evaluation for finite state and action Markov decision processes (MDP) with the least squares temporal difference (LSTD) algorithm, LSTD(λ), in an exploration-enhanced learning context, where policy costs are computed from observations of a Markov chain different from the one corresponding to the policy under evaluation. We establish for the discounted cost criter...
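The two LSTD abstracts above study off-policy and exploration-enhanced variants; as a baseline illustration, the following is a minimal on-policy LSTD(λ) sketch for policy evaluation: accumulate eligibility traces along a trajectory, build the matrix A and vector b, and solve Aθ = b for the value-function weights. The feature map and toy chain are assumptions for illustration, not the setting of the papers.

```python
import numpy as np

def lstd_lambda(transitions, phi, gamma=0.9, lam=0.5):
    """On-policy LSTD(lambda) sketch.
    transitions : list of (s, r, s_next) sampled along one trajectory
    phi         : state -> feature vector of fixed length d
    Solves A theta = b, where A and b are accumulated with
    eligibility traces z_t = gamma * lam * z_{t-1} + phi(s_t)."""
    d = len(phi(transitions[0][0]))
    A = np.zeros((d, d))
    b = np.zeros(d)
    z = np.zeros(d)
    for s, r, s_next in transitions:
        z = gamma * lam * z + phi(s)
        A += np.outer(z, phi(s) - gamma * phi(s_next))
        b += z * r
    return np.linalg.solve(A, b)

# hypothetical two-state deterministic chain 0 -> 1 -> 0, reward 1 per step;
# with tabular (one-hot) features the true value is 1/(1-gamma) = 10 per state
phi = lambda s: np.eye(2)[s]
traj = [(t % 2, 1.0, (t + 1) % 2) for t in range(2000)]
theta = lstd_lambda(traj, phi)
```

With tabular features and consistent samples the LSTD(λ) fixed point coincides with the true value function, which makes this toy chain a convenient correctness check.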
Journal
Journal title: Inverse Problems
Year: 2011
ISSN: 0266-5611, 1361-6420
DOI: 10.1088/0266-5611/27/4/045012